Search dblp
- case-insensitive prefix search: default
  e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append dollar sign ($) to word
  e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by space
  e.g., codd model
- boolean or: connect words by pipe symbol (|)
  e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
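For scripted access, the same operators can be sent to dblp's public search API, which returns XML or JSON. A minimal Python sketch (assuming the third-party requests package; the endpoint and the q/format/h parameters are those of dblp's publication search API):

```python
import requests

# Query dblp's publication search API with the operators described above:
# "$" for exact-word match, space for boolean and, "|" for boolean or.
resp = requests.get(
    "https://dblp.org/search/publ/api",
    params={
        "q": "sparse$ training neural|network",
        "format": "json",  # xml is the default
        "h": 30,           # number of hits to return
    },
    timeout=30,
)
resp.raise_for_status()

# Each hit carries its bibliographic fields under "info".
hits = resp.json()["result"]["hits"].get("hit", [])
for hit in hits:
    info = hit["info"]
    print(info.get("year"), "-", info.get("title"))
```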
Author search results
no matches
Venue search results
no matches
Publication search results
found 55 matches
- 2024
  - Fali Wang, Tianxiang Zhao, Suhang Wang: Distribution Consistency based Self-Training for Graph Neural Networks with Sparse Labels. WSDM 2024: 712-720
  - Fali Wang, Tianxiang Zhao, Suhang Wang: Distribution Consistency based Self-Training for Graph Neural Networks with Sparse Labels. CoRR abs/2401.10394 (2024)
  - Junbo Li, Zichen Miao, Qiang Qiu, Ruqi Zhang: Training Bayesian Neural Networks with Sparse Subspace Variational Inference. CoRR abs/2402.11025 (2024)
- 2023
  - Horst Petschenig, Robert Legenstein: Quantized rewiring: hardware-aware training of sparse deep neural networks. Neuromorph. Comput. Eng. 3(2): 24006 (2023)
  - Chao Fang, Wei Sun, Aojun Zhou, Zhongfeng Wang: CEST: Computation-Efficient N:M Sparse Training for Deep Neural Networks. DATE 2023: 1-2
  - Murtiza Ali, Aditya Arie Nugraha, Karan Nathwani: Exploiting Sparse Recovery Algorithms for Semi-Supervised Training of Deep Neural Networks for Direction-of-Arrival Estimation. ICASSP 2023: 1-5
  - Zirui Liu, Shengyuan Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu: RSC: Accelerate Graph Neural Networks Training via Randomized Sparse Computations. ICML 2023: 21951-21968
  - Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh: SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks at the Edge. ICML 2023: 26215-26227
  - Ruibo Fan, Wei Wang, Xiaowen Chu: Fast Sparse GPU Kernels for Accelerated Training of Graph Neural Networks. IPDPS 2023: 501-511
  - Tiechui Yao, Jue Wang, Junyu Gu, Yumeng Shi, Fang Liu, Xiaoguang Wang, Yangang Wang, Xuebin Chi: A Sparse Matrix Optimization Method for Graph Neural Networks Training. KSEM (1) 2023: 114-123
  - Rainer Engelken: SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks. NeurIPS 2023
  - Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh: SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks. CoRR abs/2302.04852 (2023)
  - Abhisek Kundu, Naveen K. Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey: AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks. CoRR abs/2304.06941 (2023)
  - Rainer Engelken: SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks. CoRR abs/2312.17216 (2023)
- 2022
  - Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy: A brain-inspired algorithm for training highly sparse neural networks. Mach. Learn. 111(12): 4411-4452 (2022)
  - Shengwei Li, Zhiquan Lai, Dongsheng Li, Yiming Zhang, Xiangyu Ye, Yabo Duan: EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks. ICPP 2022: 7:1-7:11
  - Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo P. Mandic: Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks. CoRR abs/2207.08629 (2022)
  - Zirui Liu, Shengyuan Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu: RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations. CoRR abs/2210.10737 (2022)
- 2021
  - Noureldin Laban, Bassam Abdellatif, Hala M. Ebeid, Howida A. Shedeed, Mohamed F. Tolba: Sparse Pixel Training of Convolutional Neural Networks for Land Cover Classification. IEEE Access 9: 52067-52078 (2021)
  - Shiwei Liu, Iftitahu Ni'mah, Vlado Menkovski, Decebal Constantin Mocanu, Mykola Pechenizkiy: Efficient and effective training of sparse recurrent neural networks. Neural Comput. Appl. 33(15): 9625-9636 (2021)
  - Cesar F. Caiafa, Ziyao Wang, Jordi Solé-Casals, Qibin Zhao: Learning From Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding. CVPR Workshops 2021: 2621-2630
  - Jianchao Yang, Mei Wen, Minjin Tang, Junzhong Shen, Chunyuan Zhang: SAI: Self-Adjusting Incremental Quantile Estimation for Sparse Training of Neural Networks on Hardware Accelerators. HPCC/DSS/SmartCity/DependSys 2021: 1049-1058
  - Gunduz Vehbi Demirci, Hakan Ferhatosmanoglu: Partitioning sparse deep neural networks for scalable training and inference. ICS 2021: 254-265
  - Jeongwoo Park, Sunwoo Lee, Dongsuk Jeon: A 40nm 4.81TFLOPS/W 8b Floating-Point Training Processor for Non-Sparse Neural Networks Using Shared Exponent Bias and 24-Way Fused Multiply-Add Tree. ISSCC 2021: 148-150
  - Yi-Lin Sung, Varun Nair, Colin Raffel: Training Neural Networks with Fixed Sparse Masks. NeurIPS 2021: 24193-24205
  - Gunduz Vehbi Demirci, Hakan Ferhatosmanoglu: Partitioning sparse deep neural networks for scalable training and inference. CoRR abs/2104.11805 (2021)
  - Lorenzo Chicchi, Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti, Marco Ciavarella, Duccio Fanelli: On the training of sparse and dense deep neural networks: less parameters, same performance. CoRR abs/2106.09021 (2021)
  - Shengwei Li, Zhiquan Lai, Dongsheng Li, Xiangyu Ye, Yabo Duan: EmbRace: Accelerating Sparse Communication for Distributed Training of NLP Neural Networks. CoRR abs/2110.09132 (2021)
  - Yi-Lin Sung, Varun Nair, Colin Raffel: Training Neural Networks with Fixed Sparse Masks. CoRR abs/2111.09839 (2021)
- 2020
  - Pengcheng Dai, Jianlei Yang, Xucheng Ye, Xingzhou Cheng, Junyu Luo, Linghao Song, Yiran Chen, Weisheng Zhao: SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training. DAC 2020: 1-6
skipping 25 more matches
manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt-in for them to become active. All settings here will be stored as cookies with your web browser. For more information see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
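For reference, a minimal Python sketch of the lookup this option performs, using Unpaywall's v2 REST API (the DOI below is a placeholder; Unpaywall requires a contact email as a query parameter):

```python
import requests

doi = "10.1234/example-doi"  # placeholder: substitute a real DOI
resp = requests.get(
    f"https://api.unpaywall.org/v2/{doi}",
    params={"email": "you@example.com"},  # required contact email
    timeout=30,
)
resp.raise_for_status()

# best_oa_location is null when no open access copy is known.
best = resp.json().get("best_oa_location")
print(best["url"] if best else "no open access copy found")
```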
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
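A minimal Python sketch of an equivalent check against the Wayback Machine availability API (the page URL is a placeholder):

```python
import requests

resp = requests.get(
    "https://archive.org/wayback/available",
    params={"url": "http://example.com/dead-page"},  # placeholder URL
    timeout=30,
)
resp.raise_for_status()

# "closest" holds the nearest archived snapshot, if any exists.
snap = resp.json().get("archived_snapshots", {}).get("closest")
print(snap["url"] if snap else "no archived copy found")
```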
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
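A minimal Python sketch of fetching such a reference list yourself via the Crossref REST API (placeholder DOI; Crossref returns only references that publishers have deposited):

```python
import requests

doi = "10.1234/example-doi"  # placeholder: substitute a real DOI
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()

# References, when deposited, appear under message.reference.
for ref in resp.json()["message"].get("reference", [])[:10]:
    print(ref.get("DOI") or ref.get("unstructured", "?"))
```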
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
Privacy notice: By enabling the option above, your browser will contact the API of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
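A minimal Python sketch of the corresponding citation lookup via the OpenCitations COCI API (placeholder DOI; the v1 COCI endpoint is assumed here and may since have been superseded by a newer unified index API):

```python
import requests

doi = "10.1234/example-doi"  # placeholder: substitute a real DOI
resp = requests.get(
    f"https://opencitations.net/index/coci/api/v1/citations/{doi}",
    timeout=60,
)
resp.raise_for_status()

# Each record pairs a citing DOI with the cited DOI.
for c in resp.json()[:10]:
    print(c["citing"], "cites", c["cited"])
```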
OpenAlex data
Load additional information about publications from OpenAlex.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
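A minimal Python sketch of retrieving a work record from the OpenAlex API (placeholder DOI; no API key is required):

```python
import requests

# OpenAlex resolves DOI-based external IDs directly in the URL path.
resp = requests.get(
    "https://api.openalex.org/works/doi:10.1234/example-doi",  # placeholder DOI
    timeout=30,
)
resp.raise_for_status()

work = resp.json()
print(work["display_name"], "-", work.get("cited_by_count"), "citations")
```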
retrieved on 2024-05-22 17:37 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license